Streaming events
When you create a Response with stream set to true, the server emits server-sent events to the client as the Response is generated. This section documents the events emitted by the server.
response.created
Emitted when a response is created.
The response that was created.
An error object returned when the model fails to generate a Response.
Details about why the response is incomplete.
A system (or developer) message inserted into the model's context.
When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
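Sketched as request payloads (the model name and response ID below are illustrative, and no API call is made): because instructions are not inherited through previous_response_id, each turn can set its own system-level guidance:

```python
# First turn: sets initial system-level instructions.
first_request = {
    "model": "gpt-4o",  # illustrative model name
    "instructions": "Answer in formal English.",
    "input": "Summarize the report.",
}

# Follow-up turn: references the first response but supplies fresh
# instructions, since previous instructions are not carried over.
followup_request = {
    "model": "gpt-4o",
    "previous_response_id": "resp_123",  # hypothetical ID from the first turn
    "instructions": "Answer in casual French.",
    "input": "Now translate the summary.",
}

print(followup_request["instructions"])  # Answer in casual French.
```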
A list of one or many input items to the model, containing different content types.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
An item representing part of the context for the response to be generated by the model. Can contain text, images, and audio inputs, as well as previous assistant responses and tool call outputs.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.
[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
The output of a computer tool call.
A computer screenshot image used with the computer use tool.
The safety checks reported by the API that have been acknowledged by the developer.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to run a function. See the function calling guide for more information.
The output of a function tool call.
Text, image, or file output of the function tool call.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The unique ID of the function tool call output. Populated when this item is returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
The output of a local shell tool call.
A tool representing a request to execute one or more shell commands.
The shell commands and limits that describe how to run the tool call.
The streamed output items emitted by a shell tool call.
Captured chunks of stdout and stderr output, along with their associated outcomes.
The exit or timeout outcome associated with this shell call.
Indicates that the shell call exceeded its configured time limit.
A tool call representing a request to create, delete, or update files using diff patches.
The specific create, delete, or update instruction for the apply_patch tool call.
Instruction for creating a new file via the apply_patch tool.
Instruction for deleting an existing file via the apply_patch tool.
The streamed output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A response to an MCP approval request.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
The output of a custom tool call from your code, being sent back to the model.
The output from the custom tool call generated by your code. Can be a string or a list of output content.
Text, image, or file output of the custom tool call.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
A call to a custom tool created by the model.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
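A minimal client-side check of these constraints can be sketched like this (the function name is illustrative; the API also enforces the limits server-side):

```python
# Validate the metadata constraints described above:
# at most 16 pairs, keys <= 64 characters, string values <= 512 characters.

def validate_metadata(metadata: dict) -> list[str]:
    errors = []
    if len(metadata) > 16:
        errors.append("metadata may contain at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            errors.append(f"key too long: {key[:10]}...")
        if isinstance(value, str) and len(value) > 512:
            errors.append(f"value too long for key: {key}")
    return errors

print(validate_metadata({"team": "search", "priority": "high"}))  # []
```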
Model ID used to generate the response, like gpt-4o or o3. OpenAI
offers a wide range of models with different capabilities, performance
characteristics, and price points. Refer to the model guide
to browse and compare available models.
An array of content items generated by the model.
- The length and order of items in the output array is dependent on the model's response.
- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
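Where an SDK does not expose output_text, the aggregation it describes can be sketched by hand (the helper name and sample payload below are illustrative):

```python
# Aggregate text the way SDK `output_text` conveniences do: concatenate
# every output_text content part across all message items in the output array.

def collect_output_text(output: list[dict]) -> str:
    parts = []
    for item in output:
        if item.get("type") != "message":
            continue
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content.get("text", ""))
    return "".join(parts)

sample_output = [
    {
        "type": "message",
        "role": "assistant",
        "content": [
            {"type": "output_text", "text": "Hello, ", "annotations": []},
            {"type": "output_text", "text": "world.", "annotations": []},
        ],
    }
]
print(collect_output_text(sample_output))  # Hello, world.
```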
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to run a function. See the function calling guide for more information.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.
[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
A tool call that executes one or more shell commands in a managed environment.
The shell commands and limits that describe how to run the tool call.
The output of a shell tool call.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
An array of shell call output contents.
Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.
Indicates that the shell call exceeded its configured time limit.
A tool call that applies file diffs by creating, deleting, or updating files.
One of the create_file, delete_file, or update_file operations applied via apply_patch.
Instruction describing how to create a file via the apply_patch tool.
Instruction describing how to delete a file via the apply_patch tool.
The output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A call to a custom tool created by the model.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
Constrains the tools available to the model to a pre-defined set.
Constrains the tools available to the model to a pre-defined set.
auto allows the model to pick from among the allowed tools and generate a
message.
required requires the model to call one or more of the allowed tools.
A list of tool definitions that the model should be allowed to call.
For the Responses API, the list of tool definitions might look like:
[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]
Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
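A tool_choice payload constraining the model to an allowed set might be sketched as follows (assuming the allowed_tools shape; the tool entries mirror the example list above):

```python
# Sketch of a tool_choice payload that restricts the model to a
# pre-defined set of tools. "required" forces a call to one of them;
# "auto" would let the model pick a tool or just generate a message.

tool_choice = {
    "type": "allowed_tools",
    "mode": "required",
    "tools": [
        {"type": "function", "name": "get_weather"},
        {"type": "mcp", "server_label": "deepwiki"},
        {"type": "image_generation"},
    ],
}
print(tool_choice["mode"])  # required
```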
The type of hosted tool the model should use. Learn more about built-in tools.
Allowed values are:
- file_search
- web_search_preview
- computer_use_preview
- code_interpreter
- image_generation
Use this option to force the model to call a specific function.
Use this option to force the model to call a specific tool on a remote MCP server.
Use this option to force the model to call a specific custom tool.
Forces the model to call the apply_patch tool when executing a tool call.
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
We support the following categories of tools:
- Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
- MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Defines a function in your own code the model can choose to call. Learn more about function calling.
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
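A small local evaluator makes the comparison and compound semantics concrete (illustrative only; the API applies these filters server-side):

```python
# Evaluate ComparisonFilter ({"type": "eq", "key": ..., "value": ...}) and
# CompoundFilter ({"type": "and"/"or", "filters": [...]}) shapes locally.
import operator

OPS = {
    "eq": operator.eq, "ne": operator.ne,
    "gt": operator.gt, "gte": operator.ge,
    "lt": operator.lt, "lte": operator.le,
    "in": lambda a, b: a in b, "nin": lambda a, b: a not in b,
}

def matches(filt: dict, attributes: dict) -> bool:
    if filt["type"] in ("and", "or"):
        results = (matches(f, attributes) for f in filt["filters"])
        return all(results) if filt["type"] == "and" else any(results)
    return OPS[filt["type"]](attributes.get(filt["key"]), filt["value"])

filt = {"type": "and", "filters": [
    {"type": "eq", "key": "author", "value": "ana"},
    {"type": "gte", "key": "year", "value": 2020},
]}
print(matches(filt, {"author": "ana", "year": 2023}))  # True
```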
The maximum number of results to return. This number should be between 1 and 50 inclusive.
Ranking options for search.
Weights that control how reciprocal rank fusion balances semantic embedding matches versus sparse keyword matches when hybrid search is enabled.
A tool that controls a virtual computer. Learn more about the computer tool.
Search the Internet for sources related to the prompt. Learn more about the web search tool.
Filters for the search.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
List of allowed tool names or a filter object.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox:
connector_dropbox - Gmail:
connector_gmail - Google Calendar:
connector_googlecalendar - Google Drive:
connector_googledrive - Microsoft Teams:
connector_microsoftteams - Outlook Calendar:
connector_outlookcalendar - Outlook Email:
connector_outlookemail - SharePoint:
connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
Specify which of the MCP server's tools require approval.
Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A tool that runs Python code to help generate a response to a prompt.
The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
A tool that generates images using a model like gpt-image-1.
Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1. Unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
A tool that allows the model to execute shell commands in a local environment.
A tool that allows the model to execute shell commands.
A custom tool that processes input using a specified format. Learn more about custom tools.
The input format for the custom tool. Default is unconstrained text.
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
Whether to run the model response in the background. Learn more.
The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation.
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
SDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
The unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
conversation state. Cannot be used in conjunction with conversation.
Reference to a prompt template and its variables. Learn more.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
gpt-5 and o-series models only
Configuration options for reasoning models.
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
Deprecated: use summary instead.
A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model's reasoning process.
One of auto, concise, or detailed.
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.
Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
The status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
Default response format. Used to generate text responses.
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
A description of what the response format is for, used by the model to determine how to respond in the format.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
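Putting these fields together, a text.format payload enabling Structured Outputs might be sketched like this (the schema itself is illustrative):

```python
# Sketch of a `text.format` payload for Structured Outputs.
# Field names follow the descriptions above.

text_format = {
    "format": {
        "type": "json_schema",
        "name": "weather_report",        # a-z, A-Z, 0-9, _ and -, max 64 chars
        "description": "A structured weather summary.",
        "strict": True,                  # enforce exact schema adherence
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temperature_c": {"type": "number"},
            },
            "required": ["city", "temperature_c"],
            "additionalProperties": False,
        },
    }
}

print(text_format["format"]["type"])  # json_schema
```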
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
The truncation strategy to use for the model response.
- auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
A detailed breakdown of the input tokens.
The number of tokens that were retrieved from the cache. More on prompt caching.
A detailed breakdown of the output tokens.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
{
"type": "response.created",
"response": {
"id": "resp_67ccfcdd16748190a91872c75d38539e09e4d4aac714747c",
"object": "response",
"created_at": 1741487325,
"status": "in_progress",
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4o-2024-08-06",
"output": [],
"parallel_tool_calls": true,
"previous_response_id": null,
"reasoning": {
"effort": null,
"summary": null
},
"store": true,
"temperature": 1,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [],
"top_p": 1,
"truncation": "disabled",
"usage": null,
"user": null,
"metadata": {}
},
"sequence_number": 1
}

response.in_progress
Emitted when the response is in progress.
The response that is in progress.
An error object returned when the model fails to generate a Response.
Details about why the response is incomplete.
A system (or developer) message inserted into the model's context.
When used along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses.
A list of one or many input items to the model, containing different content types.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
An item representing part of the context for the response to be generated by the model. Can contain text, images, and audio inputs, as well as previous assistant responses and tool call outputs.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.
[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
The output of a computer tool call.
A computer screenshot image used with the computer use tool.
The safety checks reported by the API that have been acknowledged by the developer.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to run a function. See the function calling guide for more information.
The output of a function tool call.
Text, image, or file output of the function tool call.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The unique ID of the function tool call output. Populated when this item is returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
The output of a local shell tool call.
A tool representing a request to execute one or more shell commands.
The shell commands and limits that describe how to run the tool call.
The streamed output items emitted by a shell tool call.
Captured chunks of stdout and stderr output, along with their associated outcomes.
The exit or timeout outcome associated with this shell call.
Indicates that the shell call exceeded its configured time limit.
A tool call representing a request to create, delete, or update files using diff patches.
The specific create, delete, or update instruction for the apply_patch tool call.
Instruction for creating a new file via the apply_patch tool.
Instruction for deleting an existing file via the apply_patch tool.
The streamed output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A response to an MCP approval request.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
The output of a custom tool call from your code, being sent back to the model.
The output from the custom tool call generated by your code. Can be a string or a list of output content.
Text, image, or file output of the custom tool call.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
A call to a custom tool created by the model.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
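The metadata limits described above can be sketched as a small client-side check before sending a request; the helper name is hypothetical and the check simply mirrors the documented constraints:

```python
def validate_metadata(metadata: dict) -> None:
    """Sketch of the documented metadata limits: at most 16 key-value
    pairs, keys at most 64 characters, string values at most 512 characters."""
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if not isinstance(key, str) or len(key) > 64:
            raise ValueError(f"invalid metadata key: {key!r}")
        if isinstance(value, str) and len(value) > 512:
            raise ValueError(f"metadata value for {key!r} exceeds 512 characters")

# A well-formed metadata object passes silently.
validate_metadata({"order_id": "ord_123", "campaign": "spring_launch"})
```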
Model ID used to generate the response, like gpt-4o or o3. OpenAI
offers a wide range of models with different capabilities, performance
characteristics, and price points. Refer to the model guide
to browse and compare available models.
An array of content items generated by the model.
- The length and order of items in the output array depend on the model's response.
- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, consider using the output_text property where supported in SDKs.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to run a function. See the function calling guide for more information.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]

A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
A tool call that executes one or more shell commands in a managed environment.
The shell commands and limits that describe how to run the tool call.
The output of a shell tool call.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
An array of shell call output contents
Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.
Indicates that the shell call exceeded its configured time limit.
A tool call that applies file diffs by creating, deleting, or updating files.
One of the create_file, delete_file, or update_file operations applied via apply_patch.
Instruction describing how to create a file via the apply_patch tool.
Instruction describing how to delete a file via the apply_patch tool.
The output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A call to a custom tool created by the model.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
Constrains the tools available to the model to a pre-defined set.
Constrains the tools available to the model to a pre-defined set.
auto allows the model to pick from among the allowed tools and generate a
message.
required requires the model to call one or more of the allowed tools.
A list of tool definitions that the model should be allowed to call.
For the Responses API, the list of tool definitions might look like:
[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
The type of hosted tool the model should use. Learn more about built-in tools.
Allowed values are:
- file_search
- web_search_preview
- computer_use_preview
- code_interpreter
- image_generation
Use this option to force the model to call a specific function.
Use this option to force the model to call a specific tool on a remote MCP server.
Use this option to force the model to call a specific custom tool.
Forces the model to call the apply_patch tool when executing a tool call.
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
We support the following categories of tools:
- Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
- MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Defines a function in your own code the model can choose to call. Learn more about function calling.
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
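As a sketch, a compound filter combining two comparison filters might be expressed as the following payload. The attribute keys ("region", "year") are hypothetical, and the exact field names follow the ComparisonFilter/CompoundFilter shapes described here:

```python
# Hypothetical attribute keys; "and" combines the two comparison filters,
# each of which compares an attribute key to a value with an operator.
filters = {
    "type": "and",
    "filters": [
        {"type": "eq", "key": "region", "value": "us"},
        {"type": "gte", "key": "year", "value": 2023},
    ],
}
```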
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
The maximum number of results to return. This number should be between 1 and 50 inclusive.
Ranking options for search.
Weights that control how reciprocal rank fusion balances semantic embedding matches versus sparse keyword matches when hybrid search is enabled.
A tool that controls a virtual computer. Learn more about the computer tool.
Search the Internet for sources related to the prompt. Learn more about the web search tool.
Filters for the search.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
List of allowed tool names or a filter object.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox:
connector_dropbox - Gmail:
connector_gmail - Google Calendar:
connector_googlecalendar - Google Drive:
connector_googledrive - Microsoft Teams:
connector_microsoftteams - Outlook Calendar:
connector_outlookcalendar - Outlook Email:
connector_outlookemail - SharePoint:
connector_sharepoint
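Putting the connector fields together, an MCP tool entry using a service connector might look like this sketch. The server_label and token are placeholders; your application supplies the OAuth access token it obtained:

```python
# Sketch of an MCP tool entry backed by a service connector rather than
# a custom server URL. The authorization value is a placeholder.
mcp_tool = {
    "type": "mcp",
    "server_label": "gdrive",                    # placeholder label
    "connector_id": "connector_googledrive",     # one of the supported connector IDs
    "authorization": "<oauth-access-token>",     # obtained via your OAuth flow
    "require_approval": "never",
}
```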
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
Specify which of the MCP server's tools require approval.
Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A tool that runs Python code to help generate a response to a prompt.
The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
A tool that generates images using a model like gpt-image-1.
Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1. Unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
A tool that allows the model to execute shell commands in a local environment.
A tool that allows the model to execute shell commands.
A custom tool that processes input using a specified format. Learn more about custom tools
The input format for the custom tool. Default is unconstrained text.
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
Whether to run the model response in the background. Learn more.
The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation.
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
SDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
The unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
conversation state. Cannot be used in conjunction with conversation.
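A multi-turn exchange using previous_response_id can be sketched as two request payloads; the second carries the id of the first response (a hypothetical value here) instead of resending the full history:

```python
# First turn: a plain request with no prior context.
first_request = {
    "model": "gpt-4o",
    "input": "Suggest a name for a hiking app.",
}

# Suppose the server returned a response with id "resp_abc123" (hypothetical).
# The follow-up references it to continue the conversation.
follow_up_request = {
    "model": "gpt-4o",
    "previous_response_id": "resp_abc123",
    "input": "Make it shorter.",
}
```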
Reference to a prompt template and its variables. Learn more.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
gpt-5 and o-series models only
Configuration options for reasoning models.
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
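Combining the effort field above with the summary field described below, a reasoning configuration might be sketched as:

```python
# Sketch of a reasoning configuration: lower effort for a faster,
# cheaper response, with an automatically chosen summary style.
reasoning = {"effort": "low", "summary": "auto"}

request = {
    "model": "o3",
    "input": "Summarize the trade-offs between the two designs.",
    "reasoning": reasoning,
}
```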
Deprecated: use summary instead.
A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model's reasoning process.
One of auto, concise, or detailed.
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.
Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
The status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
Default response format. Used to generate text responses.
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
A description of what the response format is for, used by the model to determine how to respond in the format.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
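A text.format payload enabling Structured Outputs might look like the following sketch; the schema itself (a weather report with two fields) is purely illustrative:

```python
# Sketch of a Structured Outputs configuration. With strict set to True,
# the model's output must match the supplied JSON schema exactly.
text_format = {
    "format": {
        "type": "json_schema",
        "name": "weather_report",  # illustrative format name
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temperature_c": {"type": "number"},
            },
            "required": ["city", "temperature_c"],
            "additionalProperties": False,
        },
    }
}
```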
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
The truncation strategy to use for the model response.
- auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
A detailed breakdown of the input tokens.
The number of tokens that were retrieved from the cache. More on prompt caching.
A detailed breakdown of the output tokens.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
{
  "type": "response.in_progress",
  "response": {
    "id": "resp_67ccfcdd16748190a91872c75d38539e09e4d4aac714747c",
    "object": "response",
    "created_at": 1741487325,
    "status": "in_progress",
    "error": null,
    "incomplete_details": null,
    "instructions": null,
    "max_output_tokens": null,
    "model": "gpt-4o-2024-08-06",
    "output": [],
    "parallel_tool_calls": true,
    "previous_response_id": null,
    "reasoning": {
      "effort": null,
      "summary": null
    },
    "store": true,
    "temperature": 1,
    "text": {
      "format": {
        "type": "text"
      }
    },
    "tool_choice": "auto",
    "tools": [],
    "top_p": 1,
    "truncation": "disabled",
    "usage": null,
    "user": null,
    "metadata": {}
  },
  "sequence_number": 1
}

response.completed
Emitted when the model response is complete.
Properties of the completed response.
An error object returned when the model fails to generate a Response.
Details about why the response is incomplete.
A system (or developer) message inserted into the model's context.
When using along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
A list of one or many input items to the model, containing different content types.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
An item representing part of the context for the response to be generated by the model. Can contain text, images, and audio inputs, as well as previous assistant responses and tool call outputs.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]

A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
The output of a computer tool call.
A computer screenshot image used with the computer use tool.
The safety checks reported by the API that have been acknowledged by the developer.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to run a function. See the function calling guide for more information.
The output of a function tool call.
Text, image, or file output of the function tool call.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The unique ID of the function tool call output. Populated when this item is returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
The output of a local shell tool call.
A tool representing a request to execute one or more shell commands.
The shell commands and limits that describe how to run the tool call.
The streamed output items emitted by a shell tool call.
Captured chunks of stdout and stderr output, along with their associated outcomes.
The exit or timeout outcome associated with this shell call.
Indicates that the shell call exceeded its configured time limit.
A tool call representing a request to create, delete, or update files using diff patches.
The specific create, delete, or update instruction for the apply_patch tool call.
Instruction for creating a new file via the apply_patch tool.
Instruction for deleting an existing file via the apply_patch tool.
The streamed output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A response to an MCP approval request.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
The output of a custom tool call from your code, being sent back to the model.
The output from the custom tool call generated by your code. Can be a string or a list of output content.
Text, image, or file output of the custom tool call.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
A call to a custom tool created by the model.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Model ID used to generate the response, like gpt-4o or o3. OpenAI
offers a wide range of models with different capabilities, performance
characteristics, and price points. Refer to the model guide
to browse and compare available models.
An array of content items generated by the model.
- The length and order of items in the output array depend on the model's response.
- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, consider using the output_text property where supported in SDKs.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to run a function. See the function calling guide for more information.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]

A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
A tool call that executes one or more shell commands in a managed environment.
The shell commands and limits that describe how to run the tool call.
The output of a shell tool call.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
An array of shell call output contents.
Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.
Indicates that the shell call exceeded its configured time limit.
A tool call that applies file diffs by creating, deleting, or updating files.
One of the create_file, delete_file, or update_file operations applied via apply_patch.
Instruction describing how to create a file via the apply_patch tool.
Instruction describing how to delete a file via the apply_patch tool.
The output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A call to a custom tool created by the model.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
Constrains the tools available to the model to a pre-defined set.
Constrains the tools available to the model to a pre-defined set.
auto allows the model to pick from among the allowed tools and generate a
message.
required requires the model to call one or more of the allowed tools.
A list of tool definitions that the model should be allowed to call.
For the Responses API, the list of tool definitions might look like:
[
  { "type": "function", "name": "get_weather" },
  { "type": "mcp", "server_label": "deepwiki" },
  { "type": "image_generation" }
]

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
The type of hosted tool the model should use. Learn more about built-in tools.
Allowed values are:
- file_search
- web_search_preview
- computer_use_preview
- code_interpreter
- image_generation
Use this option to force the model to call a specific function.
Use this option to force the model to call a specific tool on a remote MCP server.
Use this option to force the model to call a specific custom tool.
Forces the model to call the apply_patch tool when executing a tool call.
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
We support the following categories of tools:
- Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
- MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Defines a function in your own code the model can choose to call. Learn more about function calling.
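A minimal function tool definition might look like the following sketch (the get_weather name and its parameters are illustrative):

```json
{
  "type": "function",
  "name": "get_weather",
  "description": "Get the current weather for a city.",
  "parameters": {
    "type": "object",
    "properties": {
      "city": { "type": "string" }
    },
    "required": ["city"],
    "additionalProperties": false
  },
  "strict": true
}
```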
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
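Assuming illustrative attribute names, a CompoundFilter combining two ComparisonFilters might look like this sketch:

```json
{
  "type": "and",
  "filters": [
    { "type": "eq", "key": "region", "value": "us" },
    { "type": "gte", "key": "year", "value": 2023 }
  ]
}
```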
The maximum number of results to return. This number should be between 1 and 50 inclusive.
Ranking options for search.
Weights that control how reciprocal rank fusion balances semantic embedding matches versus sparse keyword matches when hybrid search is enabled.
A tool that controls a virtual computer. Learn more about the computer tool.
Search the Internet for sources related to the prompt. Learn more about the web search tool.
Filters for the search.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
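Putting the location fields together, a web search tool entry with an approximate user location might be configured as in this sketch (the city and other values are illustrative):

```json
{
  "type": "web_search",
  "user_location": {
    "type": "approximate",
    "country": "US",
    "city": "Los Angeles",
    "timezone": "America/Los_Angeles"
  }
}
```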
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
List of allowed tool names or a filter object.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
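Combining these fields, an MCP tool entry using a service connector might look like the following sketch (the server_label is arbitrary and the authorization token is a placeholder your application must obtain via its own OAuth flow):

```json
{
  "type": "mcp",
  "server_label": "google_drive",
  "connector_id": "connector_googledrive",
  "authorization": "<oauth-access-token>",
  "require_approval": "never"
}
```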
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
Specify which of the MCP server's tools require approval.
Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A tool that runs Python code to help generate a response to a prompt.
The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
A tool that generates images using a model like gpt-image-1.
Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1. Unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
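Combining the options above, an image generation tool entry might be configured as in this sketch (the specific values are illustrative):

```json
{
  "type": "image_generation",
  "background": "transparent",
  "output_format": "png",
  "partial_images": 2
}
```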
A tool that allows the model to execute shell commands in a local environment.
A tool that allows the model to execute shell commands.
A custom tool that processes input using a specified format. Learn more about custom tools.
The input format for the custom tool. Default is unconstrained text.
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
Whether to run the model response in the background. Learn more.
The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation.
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
SDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
The unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
conversation state. Cannot be used in conjunction with conversation.
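A follow-up turn that continues from an earlier response might look like this request sketch (the model name and resp_123 ID are illustrative):

```json
{
  "model": "gpt-4o-mini",
  "previous_response_id": "resp_123",
  "input": "Continue the story in two sentences."
}
```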
Reference to a prompt template and its variables. Learn more.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
gpt-5 and o-series models only
Configuration options for reasoning models.
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
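As a sketch, a reasoning configuration requesting low effort with a concise summary might look like:

```json
{
  "reasoning": {
    "effort": "low",
    "summary": "concise"
  }
}
```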
Deprecated: use summary instead.
A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model's reasoning process.
One of auto, concise, or detailed.
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.
Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
The status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
Default response format. Used to generate text responses.
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
A description of what the response format is for, used by the model to determine how to respond in the format.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
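Putting this together, a text.format object enabling Structured Outputs might look like the following sketch (the schema itself is illustrative):

```json
{
  "text": {
    "format": {
      "type": "json_schema",
      "name": "weather_report",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "city": { "type": "string" },
          "temperature_c": { "type": "number" }
        },
        "required": ["city", "temperature_c"],
        "additionalProperties": false
      }
    }
  }
}
```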
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
The truncation strategy to use for the model response.
- auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
A detailed breakdown of the input tokens.
The number of tokens that were retrieved from the cache. More on prompt caching.
A detailed breakdown of the output tokens.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
{
"type": "response.completed",
"response": {
"id": "resp_123",
"object": "response",
"created_at": 1740855869,
"status": "completed",
"error": null,
"incomplete_details": null,
"input": [],
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4o-mini-2024-07-18",
"output": [
{
"id": "msg_123",
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "In a shimmering forest under a sky full of stars, a lonely unicorn named Lila discovered a hidden pond that glowed with moonlight. Every night, she would leave sparkling, magical flowers by the water's edge, hoping to share her beauty with others. One enchanting evening, she woke to find a group of friendly animals gathered around, eager to be friends and share in her magic.",
"annotations": []
}
]
}
],
"previous_response_id": null,
"reasoning_effort": null,
"store": false,
"temperature": 1,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [],
"top_p": 1,
"truncation": "disabled",
"usage": {
"input_tokens": 0,
"output_tokens": 0,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 0
},
"user": null,
"metadata": {}
},
"sequence_number": 1
}

response.failed
An event that is emitted when a response fails.
The response that failed.
An error object returned when the model fails to generate a Response.
Details about why the response is incomplete.
A system (or developer) message inserted into the model's context.
When using along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
A list of one or many input items to the model, containing different content types.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
An item representing part of the context for the response to be generated by the model. Can contain text, images, and audio inputs, as well as previous assistant responses and tool call outputs.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete or failed,
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]

A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
The output of a computer tool call.
A computer screenshot image used with the computer use tool.
The safety checks reported by the API that have been acknowledged by the developer.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to run a function. See the function calling guide for more information.
The output of a function tool call.
Text, image, or file output of the function tool call.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The unique ID of the function tool call output. Populated when this item is returned via API.
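A function_call_output item returning a result to the model might look like this sketch (the call_id and output values are illustrative):

```json
{
  "type": "function_call_output",
  "call_id": "call_abc123",
  "output": "{\"temperature_c\": 18, \"conditions\": \"cloudy\"}"
}
```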
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
The output of a local shell tool call.
A tool representing a request to execute one or more shell commands.
The shell commands and limits that describe how to run the tool call.
The streamed output items emitted by a shell tool call.
Captured chunks of stdout and stderr output, along with their associated outcomes.
The exit or timeout outcome associated with this shell call.
Indicates that the shell call exceeded its configured time limit.
A tool call representing a request to create, delete, or update files using diff patches.
The specific create, delete, or update instruction for the apply_patch tool call.
Instruction for creating a new file via the apply_patch tool.
Instruction for deleting an existing file via the apply_patch tool.
The streamed output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A response to an MCP approval request.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
The output of a custom tool call from your code, being sent back to the model.
The output from the custom tool call generated by your code. Can be a string or a list of output content.
Text, image, or file output of the custom tool call.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
A call to a custom tool created by the model.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Model ID used to generate the response, like gpt-4o or o3. OpenAI
offers a wide range of models with different capabilities, performance
characteristics, and price points. Refer to the model guide
to browse and compare available models.
An array of content items generated by the model.
- The length and order of items in the output array are dependent on the model's response.
- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, consider using the output_text property where supported in SDKs.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to run a function. See the function calling guide for more information.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.

[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]

A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
A tool call that executes one or more shell commands in a managed environment.
The shell commands and limits that describe how to run the tool call.
The output of a shell tool call.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
An array of shell call output contents.
Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.
Indicates that the shell call exceeded its configured time limit.
A tool call that applies file diffs by creating, deleting, or updating files.
One of the create_file, delete_file, or update_file operations applied via apply_patch.
Instruction describing how to create a file via the apply_patch tool.
Instruction describing how to delete a file via the apply_patch tool.
The output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A call to a custom tool created by the model.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
Constrains the tools available to the model to a pre-defined set.
Constrains the tools available to the model to a pre-defined set.
auto allows the model to pick from among the allowed tools and generate a
message.
required requires the model to call one or more of the allowed tools.
A list of tool definitions that the model should be allowed to call.
For the Responses API, the list of tool definitions might look like:
[
{ "type": "function", "name": "get_weather" },
{ "type": "mcp", "server_label": "deepwiki" },
{ "type": "image_generation" }
]
Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
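Putting the pieces above together, a request constraining the model to an allowed set of tools might look like the following sketch. The payload shape is assumed from the field descriptions in this section; the model name and input are illustrative.

```python
# Hypothetical request payload using tool_choice with an allowed_tools
# constraint: the model may only call tools from this pre-defined set.
payload = {
    "model": "gpt-4o-mini",
    "input": "What's the weather in Paris?",
    "tool_choice": {
        "type": "allowed_tools",
        "mode": "auto",  # "required" would force a call to one of these tools
        "tools": [
            {"type": "function", "name": "get_weather"},
            {"type": "image_generation"},
        ],
    },
}
```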
The type of hosted tool the model should use. Learn more about built-in tools.
Allowed values are:
file_search, web_search_preview, computer_use_preview, code_interpreter, image_generation
Use this option to force the model to call a specific function.
Use this option to force the model to call a specific tool on a remote MCP server.
Use this option to force the model to call a specific custom tool.
Forces the model to call the apply_patch tool when executing a tool call.
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
We support the following categories of tools:
- Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
- MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Defines a function in your own code the model can choose to call. Learn more about function calling.
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
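The comparison and compound filters above can be combined into a nested structure. Below is a hedged sketch; the attribute keys ("region", "year") are hypothetical examples, and the field names follow the filter descriptions in this section.

```python
# A compound attribute filter for file search: two comparison filters
# joined with "and". Attribute keys and values are illustrative.
filters = {
    "type": "and",
    "filters": [
        {"type": "eq", "key": "region", "value": "EU"},
        {"type": "gte", "key": "year", "value": 2023},
    ],
}
```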
The maximum number of results to return. This number should be between 1 and 50 inclusive.
Ranking options for search.
Weights that control how reciprocal rank fusion balances semantic embedding matches versus sparse keyword matches when hybrid search is enabled.
A tool that controls a virtual computer. Learn more about the computer tool.
Search the Internet for sources related to the prompt. Learn more about the web search tool.
Filters for the search.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
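A web search tool entry with an approximate user location might be configured as sketched below. The exact field names are assumed from the descriptions above; hedge accordingly when adapting this.

```python
# Illustrative web search tool configuration with user location hints.
web_search_tool = {
    "type": "web_search",
    "search_context_size": "medium",  # low, medium (default), or high
    "user_location": {
        "type": "approximate",
        "country": "US",                     # two-letter ISO country code
        "timezone": "America/Los_Angeles",   # IANA timezone
    },
}
```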
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
List of allowed tool names or a filter object.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
Specify which of the MCP server's tools require approval.
Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
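Combining the options above, an MCP tool entry with a tool allow-list and an approval policy might look like this sketch. The server label, URL, and tool name are hypothetical, and the nested field names are assumed from the descriptions in this section.

```python
# Illustrative MCP tool configuration: a remote server, an allowed-tools
# filter, and approval required for no tools ("never").
mcp_tool = {
    "type": "mcp",
    "server_label": "deepwiki",                  # hypothetical label
    "server_url": "https://example.com/mcp",     # hypothetical URL
    "allowed_tools": {"tool_names": ["ask_question"]},
    "require_approval": "never",                 # or "always", or a filter object
}
```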
A tool that runs Python code to help generate a response to a prompt.
The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
A tool that generates images using a model like gpt-image-1.
Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1. Unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
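The image generation options above can be assembled into a tool entry like the sketch below; values shown are the documented defaults except output_format and partial_images, which are set for illustration.

```python
# Illustrative image generation tool configuration.
image_tool = {
    "type": "image_generation",
    "background": "auto",     # transparent, opaque, or auto (default)
    "output_format": "webp",  # png (default), webp, or jpeg
    "partial_images": 2,      # 0 (default) to 3 partial images while streaming
}
```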
A tool that allows the model to execute shell commands in a local environment.
A tool that allows the model to execute shell commands.
A custom tool that processes input using a specified format. Learn more about custom tools
The input format for the custom tool. Default is unconstrained text.
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
Whether to run the model response in the background. Learn more.
The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation.
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
SDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
The unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
conversation state. Cannot be used in conjunction with conversation.
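The multi-turn pattern with previous_response_id can be sketched as two consecutive request payloads. The response ID is illustrative; in practice it comes from the first response's id field.

```python
# Hypothetical multi-turn flow using previous_response_id instead of a
# conversation object. "resp_123" stands in for the first response's id.
first_turn = {"model": "gpt-4o-mini", "input": "Hello"}

second_turn = {
    "model": "gpt-4o-mini",
    "input": "Can you elaborate on that?",
    "previous_response_id": "resp_123",  # links this turn to the prior one
}
```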
Reference to a prompt template and its variables. Learn more.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
gpt-5 and o-series models only
Configuration options for reasoning models.
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
Deprecated: use summary instead.
A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model's reasoning process.
One of auto, concise, or detailed.
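A reasoning configuration combining the effort and summary options above might look like this minimal sketch (field values are model-dependent, as noted):

```python
# Illustrative reasoning configuration for a reasoning-capable model.
reasoning = {
    "effort": "low",    # none, minimal, low, medium, or high (model-dependent)
    "summary": "auto",  # auto, concise, or detailed
}
```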
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.
Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
The status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
Default response format. Used to generate text responses.
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
A description of what the response format is for, used by the model to determine how to respond in the format.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
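A Structured Outputs configuration using the json_schema format described above might be sketched as follows; the schema name and properties are illustrative.

```python
# Minimal Structured Outputs text.format sketch. Setting strict=True asks
# the model to always match the supplied JSON Schema.
text_format = {
    "format": {
        "type": "json_schema",
        "name": "weather_report",  # hypothetical format name
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temp_c": {"type": "number"},
            },
            "required": ["city", "temp_c"],
            "additionalProperties": False,
        },
    }
}
```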
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
The truncation strategy to use for the model response.
- auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
A detailed breakdown of the input tokens.
The number of tokens that were retrieved from the cache. More on prompt caching.
A detailed breakdown of the output tokens.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
{
"type": "response.failed",
"response": {
"id": "resp_123",
"object": "response",
"created_at": 1740855869,
"status": "failed",
"error": {
"code": "server_error",
"message": "The model failed to generate a response."
},
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4o-mini-2024-07-18",
"output": [],
"previous_response_id": null,
"reasoning_effort": null,
"store": false,
"temperature": 1,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [],
"top_p": 1,
"truncation": "disabled",
"usage": null,
"user": null,
"metadata": {}
}
}
response.incomplete
An event that is emitted when a response finishes as incomplete.
The response that was incomplete.
An error object returned when the model fails to generate a Response.
Details about why the response is incomplete.
A system (or developer) message inserted into the model's context.
When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
A list of one or many input items to the model, containing different content types.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
An item representing part of the context for the response to be generated by the model. Can contain text, images, and audio inputs, as well as previous assistant responses and tool call outputs.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.
[
{ x: 100, y: 200 },
{ x: 200, y: 300 }
]
A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
The output of a computer tool call.
A computer screenshot image used with the computer use tool.
The safety checks reported by the API that have been acknowledged by the developer.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to run a function. See the function calling guide for more information.
The output of a function tool call.
Text, image, or file output of the function tool call.
A text input to the model.
An image input to the model. Learn about image inputs
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The unique ID of the function tool call output. Populated when this item is returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
The output of a local shell tool call.
A tool representing a request to execute one or more shell commands.
The shell commands and limits that describe how to run the tool call.
The streamed output items emitted by a shell tool call.
Captured chunks of stdout and stderr output, along with their associated outcomes.
The exit or timeout outcome associated with this shell call.
Indicates that the shell call exceeded its configured time limit.
A tool call representing a request to create, delete, or update files using diff patches.
The specific create, delete, or update instruction for the apply_patch tool call.
Instruction for creating a new file via the apply_patch tool.
Instruction for deleting an existing file via the apply_patch tool.
The streamed output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A response to an MCP approval request.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
The output of a custom tool call from your code, being sent back to the model.
The output from the custom tool call generated by your code. Can be a string or a list of output content.
Text, image, or file output of the custom tool call.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
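Sending a custom tool call result back to the model might look like this sketch; the call ID and output value are illustrative, and the item shape is assumed from the descriptions above.

```python
# Illustrative custom tool call output item, returned to the model as input.
custom_tool_output = {
    "type": "custom_tool_call_output",
    "call_id": "call_abc",   # matches the id of the model's tool call
    "output": "result: 42",  # a string, or a list of output content parts
}
```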
A call to a custom tool created by the model.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Model ID used to generate the response, like gpt-4o or o3. OpenAI
offers a wide range of models with different capabilities, performance
characteristics, and price points. Refer to the model guide
to browse and compare available models.
An array of content items generated by the model.
- The length and order of items in the output array is dependent on the model's response.
- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to run a function. See the function calling guide for more information.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.
[
{ x: 100, y: 200 },
{ x: 200, y: 300 }
]
A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
A tool call that executes one or more shell commands in a managed environment.
The shell commands and limits that describe how to run the tool call.
The output of a shell tool call.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
An array of shell call output contents.
Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.
Indicates that the shell call exceeded its configured time limit.
A tool call that applies file diffs by creating, deleting, or updating files.
One of the create_file, delete_file, or update_file operations applied via apply_patch.
Instruction describing how to create a file via the apply_patch tool.
Instruction describing how to delete a file via the apply_patch tool.
The output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A call to a custom tool created by the model.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
Constrains the tools available to the model to a pre-defined set.
Constrains the tools available to the model to a pre-defined set.
auto allows the model to pick from among the allowed tools and generate a
message.
required requires the model to call one or more of the allowed tools.
A list of tool definitions that the model should be allowed to call.
For the Responses API, the list of tool definitions might look like:
[
{ "type": "function", "name": "get_weather" },
{ "type": "mcp", "server_label": "deepwiki" },
{ "type": "image_generation" }
]
Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
The type of hosted tool the model should use. Learn more about built-in tools.
Allowed values are:
file_search, web_search_preview, computer_use_preview, code_interpreter, image_generation
Use this option to force the model to call a specific function.
Use this option to force the model to call a specific tool on a remote MCP server.
Use this option to force the model to call a specific custom tool.
Forces the model to call the apply_patch tool when executing a tool call.
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
We support the following categories of tools:
- Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
- MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Defines a function in your own code the model can choose to call. Learn more about function calling.
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
The maximum number of results to return. This number should be between 1 and 50 inclusive.
Ranking options for search.
Weights that control how reciprocal rank fusion balances semantic embedding matches versus sparse keyword matches when hybrid search is enabled.
A tool that controls a virtual computer. Learn more about the computer tool.
Search the Internet for sources related to the prompt. Learn more about the web search tool.
Filters for the search.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
List of allowed tool names or a filter object.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
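As a sketch, an MCP tool entry that uses a service connector instead of a custom server URL might look like the following. The label and token are placeholders; your application must obtain the OAuth access token itself:

```python
# Sketch of an MCP tool using a service connector (connector_id instead of
# a server_url). "drive" and the token string are illustrative placeholders.
mcp_tool = {
    "type": "mcp",
    "server_label": "drive",
    "connector_id": "connector_googledrive",
    "authorization": "<oauth-access-token>",  # supply a real OAuth token here
    "require_approval": "never",              # or "always", or a filter object
}
```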
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
Specify which of the MCP server's tools require approval.
Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A tool that runs Python code to help generate a response to a prompt.
The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
A tool that generates images using a model like gpt-image-1.
Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1. Unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
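A minimal image_generation tool configuration drawing on the options above might look like this sketch; only the field names and allowed values come from this section:

```python
# Sketch of an image_generation tool entry for the tools array.
image_tool = {
    "type": "image_generation",
    "background": "transparent",  # one of transparent, opaque, auto (default auto)
    "output_format": "png",       # one of png, webp, jpeg (default png)
    "partial_images": 2,          # 0 (default) to 3 partial frames while streaming
}
```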
A tool that allows the model to execute shell commands in a local environment.
A tool that allows the model to execute shell commands.
A custom tool that processes input using a specified format. Learn more about custom tools
The input format for the custom tool. Default is unconstrained text.
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
Whether to run the model response in the background. Learn more.
The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation.
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
SDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
The unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
conversation state. Cannot be used in conjunction with conversation.
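Chaining turns with previous_response_id means you do not resend earlier messages yourself. The sketch below only builds the request payloads; in practice you would pass each dict to the SDK's responses.create call, and "resp_123" stands in for the ID returned by the first response:

```python
# Sketch: multi-turn conversation via previous_response_id.
first_request = {
    "model": "gpt-4o-mini",
    "input": "Tell me a fact about unicorns.",
}

# After the first call returns a response with id "resp_123" (placeholder):
followup_request = {
    "model": "gpt-4o-mini",
    "previous_response_id": "resp_123",  # carries conversation state forward
    "input": "Tell me another one.",
    # Note: previous_response_id cannot be combined with `conversation`.
}
```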
Reference to a prompt template and its variables. Learn more.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
gpt-5 and o-series models only
Configuration options for reasoning models.
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
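An illustrative request payload using the reasoning options above; only the reasoning object's field names and values are drawn from this section, the prompt is a placeholder:

```python
# Sketch of a request with explicit reasoning configuration.
reasoning_request = {
    "model": "gpt-5.1",
    "input": "Prove that the square root of 2 is irrational.",
    "reasoning": {
        "effort": "high",    # gpt-5.1 supports none, low, medium, high
        "summary": "auto",   # request a reasoning summary: auto, concise, detailed
    },
}
```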
Deprecated: use summary instead.
A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model's reasoning process.
One of auto, concise, or detailed.
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.
Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
The status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
Default response format. Used to generate text responses.
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
A description of what the response format is for, used by the model to determine how to respond in the format.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
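A sketch of a text.format configuration that enables Structured Outputs; the schema name and fields are illustrative, not part of this reference:

```python
# Sketch: text.format payload for Structured Outputs via json_schema.
text_format = {
    "format": {
        "type": "json_schema",
        "name": "weather_report",  # a-z, A-Z, 0-9, underscores, dashes; max 64 chars
        "strict": True,            # enforce exact adherence to the schema below
        "schema": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "temp_c": {"type": "number"},
            },
            "required": ["city", "temp_c"],
            "additionalProperties": False,
        },
    }
}
```

With strict set to true, only a subset of JSON Schema is accepted, but the model's output is guaranteed to match the schema.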
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
The truncation strategy to use for the model response.
- auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
A detailed breakdown of the input tokens.
The number of tokens that were retrieved from the cache. More on prompt caching.
A detailed breakdown of the output tokens.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
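Following the recommendation above to hash end-user identifiers rather than send them raw, a request might set both fields like this sketch; the email, cache key, and prompt are placeholders:

```python
import hashlib

# Sketch: replacing the deprecated `user` field with safety_identifier
# and prompt_cache_key. user_email is a hypothetical end-user identifier.
user_email = "user@example.com"

request = {
    "model": "gpt-4o-mini",
    "input": "Summarize my last order.",
    # Hash the identifier so no raw PII is sent:
    "safety_identifier": hashlib.sha256(user_email.encode()).hexdigest(),
    # Bucket similar requests together to improve cache hit rates:
    "prompt_cache_key": "support-flow-v2",
}
```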
{
"type": "response.incomplete",
"response": {
"id": "resp_123",
"object": "response",
"created_at": 1740855869,
"status": "incomplete",
"error": null,
"incomplete_details": {
"reason": "max_tokens"
},
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4o-mini-2024-07-18",
"output": [],
"previous_response_id": null,
"reasoning_effort": null,
"store": false,
"temperature": 1,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [],
"top_p": 1,
"truncation": "disabled",
"usage": null,
"user": null,
"metadata": {}
},
"sequence_number": 1
}
response.output_item.added
Emitted when a new output item is added.
The output item that was added.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to run a function. See the function calling guide for more information.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.
[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
A tool call that executes one or more shell commands in a managed environment.
The shell commands and limits that describe how to run the tool call.
The output of a shell tool call.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
An array of shell call output contents.
Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.
Indicates that the shell call exceeded its configured time limit.
A tool call that applies file diffs by creating, deleting, or updating files.
One of the create_file, delete_file, or update_file operations applied via apply_patch.
Instruction describing how to create a file via the apply_patch tool.
Instruction describing how to delete a file via the apply_patch tool.
The output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A call to a custom tool created by the model.
{
"type": "response.output_item.added",
"output_index": 0,
"item": {
"id": "msg_123",
"status": "in_progress",
"type": "message",
"role": "assistant",
"content": []
},
"sequence_number": 1
}
response.output_item.done
Emitted when an output item is marked done.
The output item that was marked done.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress, searching, completed, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to run a function. See the function calling guide for more information.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.
[
  { x: 100, y: 200 },
  { x: 200, y: 300 }
]
A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
A tool call that executes one or more shell commands in a managed environment.
The shell commands and limits that describe how to run the tool call.
The output of a shell tool call.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
An array of shell call output contents.
Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.
Indicates that the shell call exceeded its configured time limit.
A tool call that applies file diffs by creating, deleting, or updating files.
One of the create_file, delete_file, or update_file operations applied via apply_patch.
Instruction describing how to create a file via the apply_patch tool.
Instruction describing how to delete a file via the apply_patch tool.
The output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A call to a custom tool created by the model.
{
"type": "response.output_item.done",
"output_index": 0,
"item": {
"id": "msg_123",
"status": "completed",
"type": "message",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "In a shimmering forest under a sky full of stars, a lonely unicorn named Lila discovered a hidden pond that glowed with moonlight. Every night, she would leave sparkling, magical flowers by the water's edge, hoping to share her beauty with others. One enchanting evening, she woke to find a group of friendly animals gathered around, eager to be friends and share in her magic.",
"annotations": []
}
]
},
"sequence_number": 1
}
response.content_part.added
Emitted when a new content part is added.
The content part that was added.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
A refusal from the model.
{
"type": "response.content_part.added",
"item_id": "msg_123",
"output_index": 0,
"content_index": 0,
"part": {
"type": "output_text",
"text": "",
"annotations": []
},
"sequence_number": 1
}
response.content_part.done
Emitted when a content part is done.
The content part that is done.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
A refusal from the model.
{
"type": "response.content_part.done",
"item_id": "msg_123",
"output_index": 0,
"content_index": 0,
"sequence_number": 1,
"part": {
"type": "output_text",
"text": "In a shimmering forest under a sky full of stars, a lonely unicorn named Lila discovered a hidden pond that glowed with moonlight. Every night, she would leave sparkling, magical flowers by the water's edge, hoping to share her beauty with others. One enchanting evening, she woke to find a group of friendly animals gathered around, eager to be friends and share in her magic.",
"annotations": []
}
}
response.output_text.delta
Emitted when there is an additional text delta.
The log probabilities of the tokens in the delta.
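Clients typically reconstruct the streamed text by concatenating each delta in sequence_number order; the matching response.output_text.done event then carries the full text. A minimal sketch, using hand-written stand-in events rather than a live stream:

```python
# Sketch: accumulating output_text delta events into the final text.
# These events are stand-ins for real server-sent events.
events = [
    {"type": "response.output_text.delta", "delta": "In a shimmering ", "sequence_number": 1},
    {"type": "response.output_text.delta", "delta": "forest...", "sequence_number": 2},
    {"type": "response.output_text.done", "text": "In a shimmering forest...", "sequence_number": 3},
]

buffer = []
for event in events:
    if event["type"] == "response.output_text.delta":
        buffer.append(event["delta"])

final_text = "".join(buffer)  # should equal the text carried by the .done event
```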
{
"type": "response.output_text.delta",
"item_id": "msg_123",
"output_index": 0,
"content_index": 0,
"delta": "In",
"sequence_number": 1
}
response.output_text.done
Emitted when text content is finalized.
The log probabilities of the tokens in the delta.
{
"type": "response.output_text.done",
"item_id": "msg_123",
"output_index": 0,
"content_index": 0,
"text": "In a shimmering forest under a sky full of stars, a lonely unicorn named Lila discovered a hidden pond that glowed with moonlight. Every night, she would leave sparkling, magical flowers by the water's edge, hoping to share her beauty with others. One enchanting evening, she woke to find a group of friendly animals gathered around, eager to be friends and share in her magic.",
"sequence_number": 1
}
response.refusal.delta
Emitted when there is a partial refusal text.
{
"type": "response.refusal.delta",
"item_id": "msg_123",
"output_index": 0,
"content_index": 0,
"delta": "refusal text so far",
"sequence_number": 1
}
response.refusal.done
Emitted when refusal text is finalized.
{
"type": "response.refusal.done",
"item_id": "item-abc",
"output_index": 1,
"content_index": 2,
"refusal": "final refusal text",
"sequence_number": 1
}
response.function_call_arguments.delta
Emitted when there is a partial function-call arguments delta.
{
"type": "response.function_call_arguments.delta",
"item_id": "item-abc",
"output_index": 0,
"delta": "{ \"arg\":",
"sequence_number": 1
}
response.function_call_arguments.done
Emitted when function-call arguments are finalized.
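Argument deltas arrive as raw string fragments, so clients usually buffer them and parse the JSON only once the .done event arrives. A minimal sketch with hand-written fragments matching the examples in this section:

```python
import json

# Sketch: stitching function_call_arguments delta fragments into a JSON
# string, then parsing after the .done event. Fragments are stand-ins.
deltas = ['{ "arg":', ' 123 }']
arguments = "".join(deltas)

parsed = json.loads(arguments)  # the finalized arguments object
```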
{
"type": "response.function_call_arguments.done",
"item_id": "item-abc",
"name": "get_weather",
"output_index": 1,
"arguments": "{ \"arg\": 123 }",
"sequence_number": 1
}
response.file_search_call.in_progress
Emitted when a file search call is initiated.
{
"type": "response.file_search_call.in_progress",
"output_index": 0,
"item_id": "fs_123",
"sequence_number": 1
}
response.file_search_call.searching
Emitted when a file search is currently searching.
{
"type": "response.file_search_call.searching",
"output_index": 0,
"item_id": "fs_123",
"sequence_number": 1
}
response.file_search_call.completed
Emitted when a file search call is completed (results found).
{
"type": "response.file_search_call.completed",
"output_index": 0,
"item_id": "fs_123",
"sequence_number": 1
}
response.web_search_call.in_progress
Emitted when a web search call is initiated.
{
"type": "response.web_search_call.in_progress",
"output_index": 0,
"item_id": "ws_123",
"sequence_number": 0
}
response.web_search_call.searching
Emitted when a web search call is executing.
{
"type": "response.web_search_call.searching",
"output_index": 0,
"item_id": "ws_123",
"sequence_number": 0
}
response.web_search_call.completed
Emitted when a web search call is completed.
{
"type": "response.web_search_call.completed",
"output_index": 0,
"item_id": "ws_123",
"sequence_number": 0
}
response.reasoning_summary_part.added
Emitted when a new reasoning summary part is added.
The summary part that was added.
{
"type": "response.reasoning_summary_part.added",
"item_id": "rs_6806bfca0b2481918a5748308061a2600d3ce51bdffd5476",
"output_index": 0,
"summary_index": 0,
"part": {
"type": "summary_text",
"text": ""
},
"sequence_number": 1
}
response.reasoning_summary_part.done
Emitted when a reasoning summary part is completed.
The completed summary part.
{
"type": "response.reasoning_summary_part.done",
"item_id": "rs_6806bfca0b2481918a5748308061a2600d3ce51bdffd5476",
"output_index": 0,
"summary_index": 0,
"part": {
"type": "summary_text",
"text": "**Responding to a greeting**\n\nThe user just said, \"Hello!\" So, it seems I need to engage. I'll greet them back and offer help since they're looking to chat. I could say something like, \"Hello! How can I assist you today?\" That feels friendly and open. They didn't ask a specific question, so this approach will work well for starting a conversation. Let's see where it goes from there!"
},
"sequence_number": 1
}
response.reasoning_summary_text.delta
Emitted when a delta is added to a reasoning summary text.
{
"type": "response.reasoning_summary_text.delta",
"item_id": "rs_6806bfca0b2481918a5748308061a2600d3ce51bdffd5476",
"output_index": 0,
"summary_index": 0,
"delta": "**Responding to a greeting**\n\nThe user just said, \"Hello!\" So, it seems I need to engage. I'll greet them back and offer help since they're looking to chat. I could say something like, \"Hello! How can I assist you today?\" That feels friendly and open. They didn't ask a specific question, so this approach will work well for starting a conversation. Let's see where it goes from there!",
"sequence_number": 1
}
response.reasoning_summary_text.done
Emitted when a reasoning summary text is completed.
{
"type": "response.reasoning_summary_text.done",
"item_id": "rs_6806bfca0b2481918a5748308061a2600d3ce51bdffd5476",
"output_index": 0,
"summary_index": 0,
"text": "**Responding to a greeting**\n\nThe user just said, \"Hello!\" So, it seems I need to engage. I'll greet them back and offer help since they're looking to chat. I could say something like, \"Hello! How can I assist you today?\" That feels friendly and open. They didn't ask a specific question, so this approach will work well for starting a conversation. Let's see where it goes from there!",
"sequence_number": 1
}
response.reasoning_text.delta
Emitted when a delta is added to a reasoning text.
{
"type": "response.reasoning_text.delta",
"item_id": "rs_123",
"output_index": 0,
"content_index": 0,
"delta": "The",
"sequence_number": 1
}
response.reasoning_text.done
Emitted when a reasoning text is completed.
{
"type": "response.reasoning_text.done",
"item_id": "rs_123",
"output_index": 0,
"content_index": 0,
"text": "The user is asking...",
"sequence_number": 4
}
response.image_generation_call.completed
Emitted when an image generation tool call has completed and the final image is available.
{
"type": "response.image_generation_call.completed",
"output_index": 0,
"item_id": "item-123",
"sequence_number": 1
}
response.image_generation_call.generating
Emitted when an image generation tool call is actively generating an image (intermediate state).
{
"type": "response.image_generation_call.generating",
"output_index": 0,
"item_id": "item-123",
"sequence_number": 0
}
response.image_generation_call.in_progress
Emitted when an image generation tool call is in progress.
{
"type": "response.image_generation_call.in_progress",
"output_index": 0,
"item_id": "item-123",
"sequence_number": 0
}
response.image_generation_call.partial_image
Emitted when a partial image is available during image generation streaming.
0-based index for the partial image (backend is 1-based, but this is 0-based for the user).
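A handler for these events might decode each frame as it arrives. The sketch below uses a stand-in payload rather than real image bytes:

```python
import base64

# Sketch: decoding a partial_image event. The base64 payload here is a
# stand-in for a real partially generated image frame.
event = {
    "type": "response.image_generation_call.partial_image",
    "partial_image_index": 0,  # 0-based, even though the backend is 1-based
    "partial_image_b64": base64.b64encode(b"frame-bytes").decode(),
}

frame = base64.b64decode(event["partial_image_b64"])  # raw image bytes
```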
{
"type": "response.image_generation_call.partial_image",
"output_index": 0,
"item_id": "item-123",
"sequence_number": 0,
"partial_image_index": 0,
"partial_image_b64": "..."
}
response.mcp_call_arguments.delta
Emitted when there is a delta (partial update) to the arguments of an MCP tool call.
{
"type": "response.mcp_call_arguments.delta",
"output_index": 0,
"item_id": "item-abc",
"delta": "{",
"sequence_number": 1
}
response.mcp_call_arguments.done
Emitted when the arguments for an MCP tool call are finalized.
{
"type": "response.mcp_call_arguments.done",
"output_index": 0,
"item_id": "item-abc",
"arguments": "{\"arg1\": \"value1\", \"arg2\": \"value2\"}",
"sequence_number": 1
}
response.mcp_call.completed
Emitted when an MCP tool call has completed successfully.
{
"type": "response.mcp_call.completed",
"sequence_number": 1,
"item_id": "mcp_682d437d90a88191bf88cd03aae0c3e503937d5f622d7a90",
"output_index": 0
}
response.mcp_call.failed
Emitted when an MCP tool call has failed.
{
"type": "response.mcp_call.failed",
"sequence_number": 1,
"item_id": "mcp_682d437d90a88191bf88cd03aae0c3e503937d5f622d7a90",
"output_index": 0
}
response.mcp_call.in_progress
Emitted when an MCP tool call is in progress.
{
"type": "response.mcp_call.in_progress",
"sequence_number": 1,
"output_index": 0,
"item_id": "mcp_682d437d90a88191bf88cd03aae0c3e503937d5f622d7a90"
}
response.mcp_list_tools.completed
Emitted when the list of available MCP tools has been successfully retrieved.
{
"type": "response.mcp_list_tools.completed",
"sequence_number": 1,
"output_index": 0,
"item_id": "mcpl_682d4379df088191886b70f4ec39f90403937d5f622d7a90"
}
response.mcp_list_tools.failed
Emitted when the attempt to list available MCP tools has failed.
{
"type": "response.mcp_list_tools.failed",
"sequence_number": 1,
"output_index": 0,
"item_id": "mcpl_682d4379df088191886b70f4ec39f90403937d5f622d7a90"
}
response.mcp_list_tools.in_progress
Emitted when the system is in the process of retrieving the list of available MCP tools.
{
"type": "response.mcp_list_tools.in_progress",
"sequence_number": 1,
"output_index": 0,
"item_id": "mcpl_682d4379df088191886b70f4ec39f90403937d5f622d7a90"
}
response.code_interpreter_call.in_progress
Emitted when a code interpreter call is in progress.
The index of the output item in the response for which the code interpreter call is in progress.
{
"type": "response.code_interpreter_call.in_progress",
"output_index": 0,
"item_id": "ci_12345",
"sequence_number": 1
}
response.code_interpreter_call.interpreting
Emitted when the code interpreter is actively interpreting the code snippet.
The index of the output item in the response for which the code interpreter is interpreting code.
{
"type": "response.code_interpreter_call.interpreting",
"output_index": 4,
"item_id": "ci_12345",
"sequence_number": 1
}
response.code_interpreter_call.completed
Emitted when the code interpreter call is completed.
The index of the output item in the response for which the code interpreter call is completed.
{
"type": "response.code_interpreter_call.completed",
"output_index": 5,
"item_id": "ci_12345",
"sequence_number": 1
}

response.code_interpreter_call_code.delta
Emitted when a partial code snippet is streamed by the code interpreter.
The index of the output item in the response for which the code is being streamed.
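Delta events like this arrive as fragments and can be concatenated per item_id to reconstruct the full snippet. The sketch below assumes events have already been parsed into plain dicts shaped like the examples in this section; the helper name is illustrative, not part of any SDK.

```python
# Accumulate streamed code-interpreter deltas into full snippets,
# keyed by item_id so interleaved calls don't clobber each other.
def accumulate_code_deltas(events):
    snippets = {}
    for event in events:
        if event["type"] == "response.code_interpreter_call_code.delta":
            snippets.setdefault(event["item_id"], []).append(event["delta"])
        elif event["type"] == "response.code_interpreter_call_code.done":
            # The done event carries the authoritative final code.
            snippets[event["item_id"]] = [event["code"]]
    return {item_id: "".join(parts) for item_id, parts in snippets.items()}
```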
{
"type": "response.code_interpreter_call_code.delta",
"output_index": 0,
"item_id": "ci_12345",
"delta": "print('Hello, world')",
"sequence_number": 1
}

response.code_interpreter_call_code.done
Emitted when the code snippet is finalized by the code interpreter.
{
"type": "response.code_interpreter_call_code.done",
"output_index": 3,
"item_id": "ci_12345",
"code": "print('done')",
"sequence_number": 1
}

response.output_text.annotation.added
Emitted when an annotation is added to output text content.
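Because annotations reference character offsets into a specific content part, it is useful to bucket them by item and content index as they stream in. A minimal sketch, assuming events are plain dicts shaped like the example below:

```python
# Index added annotations so they can be applied to the finished text later.
# Keys are (item_id, content_index); values are annotation dicts in arrival order.
def collect_annotations(events):
    annotations = {}
    for event in events:
        if event["type"] == "response.output_text.annotation.added":
            key = (event["item_id"], event["content_index"])
            annotations.setdefault(key, []).append(event["annotation"])
    return annotations
```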
{
"type": "response.output_text.annotation.added",
"item_id": "item-abc",
"output_index": 0,
"content_index": 0,
"annotation_index": 0,
"annotation": {
"type": "text_annotation",
"text": "This is a test annotation",
"start": 0,
"end": 10
},
"sequence_number": 1
}

response.queued
Emitted when a response is queued and waiting to be processed.
The full response object that is queued.
An error object returned when the model fails to generate a Response.
Details about why the response is incomplete.
A system (or developer) message inserted into the model's context.
When used along with previous_response_id, the instructions from a previous
response will not be carried over to the next response. This makes it simple
to swap out system (or developer) messages in new responses.
A list of one or many input items to the model, containing different content types.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role. Messages with the
assistant role are presumed to have been generated by the model in previous
interactions.
Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
An item representing part of the context for the response to be generated by the model. Can contain text, images, and audio inputs, as well as previous assistant responses and tool call outputs.
A message input to the model with a role indicating instruction following
hierarchy. Instructions given with the developer or system role take
precedence over instructions given with the user role.
A list of one or many input items to the model, containing different content types.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:
[
{ x: 100, y: 200 },
{ x: 200, y: 300 }
]

A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
The output of a computer tool call.
A computer screenshot image used with the computer use tool.
The safety checks reported by the API that have been acknowledged by the developer.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to run a function. See the function calling guide for more information.
The output of a function tool call.
Text, image, or file output of the function tool call.
A text input to the model.
An image input to the model. Learn about image inputs
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
The unique ID of the function tool call output. Populated when this item is returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
The output of a local shell tool call.
A tool representing a request to execute one or more shell commands.
The shell commands and limits that describe how to run the tool call.
The streamed output items emitted by a shell tool call.
Captured chunks of stdout and stderr output, along with their associated outcomes.
The exit or timeout outcome associated with this shell call.
Indicates that the shell call exceeded its configured time limit.
A tool call representing a request to create, delete, or update files using diff patches.
The specific create, delete, or update instruction for the apply_patch tool call.
Instruction for creating a new file via the apply_patch tool.
Instruction for deleting an existing file via the apply_patch tool.
The streamed output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A response to an MCP approval request.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
The output of a custom tool call from your code, being sent back to the model.
The output from the custom tool call generated by your code. Can be a string or a list of output content.
Text, image, or file output of the custom tool call.
A text input to the model.
An image input to the model. Learn about image inputs.
The detail level of the image to be sent to the model. One of high, low, or auto. Defaults to auto.
A file input to the model.
A call to a custom tool created by the model.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
Model ID used to generate the response, like gpt-4o or o3. OpenAI
offers a wide range of models with different capabilities, performance
characteristics, and price points. Refer to the model guide
to browse and compare available models.
An array of content items generated by the model.
- The length and order of items in the output array is dependent on the model's response.
- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.
An output message from the model.
The content of the output message.
A text output from the model.
The annotations of the text output.
A citation to a file.
A citation for a web resource used to generate a model response.
A citation for a container file used to generate a model response.
The status of the message input. One of in_progress, completed, or
incomplete. Populated when input items are returned via API.
The results of a file search tool call. See the file search guide for more information.
The status of the file search tool call. One of in_progress,
searching, incomplete, or failed.
The results of the file search tool call.
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters, booleans, or numbers.
A tool call to run a function. See the function calling guide for more information.
The results of a web search tool call. See the web search guide for more information.
An object describing the specific action taken in this web search call. Includes details on how the model used the web (search, open_page, find).
Action type "search" - Performs a web search query.
Action type "open_page" - Opens a specific URL from search results.
A tool call to a computer use tool. See the computer use guide for more information.
A click action.
Indicates which mouse button was pressed during the click. One of left, right, wheel, back, or forward.
A double click action.
A drag action.
An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g.:
[
{ x: 100, y: 200 },
{ x: 200, y: 300 }
]

A collection of keypresses the model would like to perform.
A mouse move action.
A screenshot action.
A scroll action.
An action to type in text.
The pending safety checks for the computer call.
The status of the item. One of in_progress, completed, or
incomplete. Populated when items are returned via API.
A description of the chain of thought used by a reasoning model while generating
a response. Be sure to include these items in your input to the Responses API
for subsequent turns of a conversation if you are manually
managing context.
Reasoning summary content.
Reasoning text content.
The encrypted content of the reasoning item - populated when a response is
generated with reasoning.encrypted_content in the include parameter.
An image generation request made by the model.
A tool call to run code.
The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available.
The logs output from the code interpreter.
The status of the code interpreter tool call. Valid values are in_progress, completed, incomplete, interpreting, and failed.
A tool call to run a command on the local shell.
Execute a shell command on the server.
A tool call that executes one or more shell commands in a managed environment.
The shell commands and limits that describe how to run the tool call.
The output of a shell tool call.
The maximum length of the shell command output. This is generated by the model and should be passed back with the raw output.
An array of shell call output contents.
Represents either an exit outcome (with an exit code) or a timeout outcome for a shell call output chunk.
Indicates that the shell call exceeded its configured time limit.
A tool call that applies file diffs by creating, deleting, or updating files.
One of the create_file, delete_file, or update_file operations applied via apply_patch.
Instruction describing how to create a file via the apply_patch tool.
Instruction describing how to delete a file via the apply_patch tool.
The output emitted by an apply patch tool call.
The unique ID of the apply patch tool call output. Populated when this item is returned via API.
An invocation of a tool on an MCP server.
Unique identifier for the MCP tool call approval request.
Include this value in a subsequent mcp_approval_response input to approve or reject the corresponding tool call.
A list of tools available on an MCP server.
The tools available on the server.
A request for human approval of a tool invocation.
A call to a custom tool created by the model.
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
How the model should select which tool (or tools) to use when generating
a response. See the tools parameter to see how to specify which tools
the model can call.
Controls which (if any) tool is called by the model.
none means the model will not call any tool and instead generates a message.
auto means the model can pick between generating a message or calling one or
more tools.
required means the model must call one or more tools.
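A small sketch of building the tool_choice value described above; the helper name is hypothetical, and the allowed_tools object shape mirrors the constrained-set variant documented below.

```python
# Build a tool_choice value: either a plain mode string, or an
# allowed_tools object that constrains the model to a subset of tools.
def make_tool_choice(mode, tools=None):
    if mode not in {"none", "auto", "required"}:
        raise ValueError("unsupported tool_choice mode: " + mode)
    if tools is None:
        return mode
    return {"type": "allowed_tools", "mode": mode, "tools": list(tools)}
```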
Constrains the tools available to the model to a pre-defined set.
Constrains the tools available to the model to a pre-defined set.
auto allows the model to pick from among the allowed tools and generate a
message.
required requires the model to call one or more of the allowed tools.
A list of tool definitions that the model should be allowed to call.
For the Responses API, the list of tool definitions might look like:
[
{ "type": "function", "name": "get_weather" },
{ "type": "mcp", "server_label": "deepwiki" },
{ "type": "image_generation" }
]

Indicates that the model should use a built-in tool to generate a response. Learn more about built-in tools.
The type of hosted tool the model should use. Learn more about built-in tools.
Allowed values are:
- file_search
- web_search_preview
- computer_use_preview
- code_interpreter
- image_generation
Use this option to force the model to call a specific function.
Use this option to force the model to call a specific tool on a remote MCP server.
Use this option to force the model to call a specific custom tool.
Forces the model to call the apply_patch tool when executing a tool call.
An array of tools the model may call while generating a response. You
can specify which tool to use by setting the tool_choice parameter.
We support the following categories of tools:
- Built-in tools: Tools that are provided by OpenAI that extend the model's capabilities, like web search or file search. Learn more about built-in tools.
- MCP Tools: Integrations with third-party systems via custom MCP servers or predefined connectors such as Google Drive and SharePoint. Learn more about MCP Tools.
- Function calls (custom tools): Functions that are defined by you, enabling the model to call your own code with strongly typed arguments and outputs. Learn more about function calling. You can also use custom tools to call your own code.
Defines a function in your own code the model can choose to call. Learn more about function calling.
A tool that searches for relevant content from uploaded files. Learn more about the file search tool.
A filter to apply.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
Combine multiple filters using and or or.
Array of filters to combine. Items can be ComparisonFilter or CompoundFilter.
A filter used to compare a specified attribute key to a given value using a defined comparison operation.
Specifies the comparison operator: eq, ne, gt, gte, lt, lte, in, nin.
- eq: equals
- ne: not equal
- gt: greater than
- gte: greater than or equal
- lt: less than
- lte: less than or equal
- in: in
- nin: not in
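The comparison and compound filters above compose into a nested JSON structure. A minimal sketch of building one, with hypothetical helper names:

```python
# Build a ComparisonFilter: {"type": <operator>, "key": ..., "value": ...}.
def comparison(key, op, value):
    if op not in {"eq", "ne", "gt", "gte", "lt", "lte", "in", "nin"}:
        raise ValueError("unsupported operator: " + op)
    return {"type": op, "key": key, "value": value}

# Build a CompoundFilter combining sub-filters with "and" or "or".
def compound(op, filters):
    if op not in {"and", "or"}:
        raise ValueError("compound filters must use 'and' or 'or'")
    return {"type": op, "filters": list(filters)}
```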
The maximum number of results to return. This number should be between 1 and 50 inclusive.
Ranking options for search.
Weights that control how reciprocal rank fusion balances semantic embedding matches versus sparse keyword matches when hybrid search is enabled.
A tool that controls a virtual computer. Learn more about the computer tool.
Search the Internet for sources related to the prompt. Learn more about the web search tool.
Filters for the search.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The approximate location of the user.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
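The location fields above are all optional, so a tool definition only needs to include the ones you know. A sketch, assuming the documented approximate-location shape (the helper name is illustrative):

```python
# Assemble a web_search tool definition with an approximate user location.
# Only the location fields that were provided are included in the payload.
def web_search_tool(country=None, city=None, region=None, timezone=None):
    location = {"type": "approximate"}
    for key, value in {"country": country, "city": city,
                       "region": region, "timezone": timezone}.items():
        if value is not None:
            location[key] = value
    return {"type": "web_search", "user_location": location}
```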
Give the model access to additional tools via remote Model Context Protocol (MCP) servers. Learn more about MCP.
List of allowed tool names or a filter object.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
An OAuth access token that can be used with a remote MCP server, either with a custom MCP server URL or a service connector. Your application must handle the OAuth authorization flow and provide the token here.
Identifier for service connectors, like those available in ChatGPT. One of
server_url or connector_id must be provided. Learn more about service
connectors here.
Currently supported connector_id values are:
- Dropbox: connector_dropbox
- Gmail: connector_gmail
- Google Calendar: connector_googlecalendar
- Google Drive: connector_googledrive
- Microsoft Teams: connector_microsoftteams
- Outlook Calendar: connector_outlookcalendar
- Outlook Email: connector_outlookemail
- SharePoint: connector_sharepoint
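Since exactly one of server_url or connector_id must be provided, it helps to validate that before sending the request. A sketch with a hypothetical helper name:

```python
# Configure an MCP tool backed by either a service connector or a
# custom server URL; exactly one of the two must be supplied.
def mcp_tool(server_label, connector_id=None, server_url=None,
             authorization=None, require_approval="always"):
    if (connector_id is None) == (server_url is None):
        raise ValueError("provide exactly one of connector_id or server_url")
    tool = {"type": "mcp", "server_label": server_label,
            "require_approval": require_approval}
    if connector_id is not None:
        tool["connector_id"] = connector_id
    else:
        tool["server_url"] = server_url
    if authorization is not None:
        tool["authorization"] = authorization
    return tool
```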
Optional HTTP headers to send to the MCP server. Use for authentication or other purposes.
Specify which of the MCP server's tools require approval.
Specify which of the MCP server's tools require approval. Can be
always, never, or a filter object associated with tools
that require approval.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A filter object to specify which tools are allowed.
Indicates whether or not a tool modifies data or is read-only. If an
MCP server is annotated with readOnlyHint,
it will match this filter.
A tool that runs Python code to help generate a response to a prompt.
The code interpreter container. Can be a container ID or an object that
specifies uploaded file IDs to make available to your code, along with an
optional memory_limit setting.
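The container value can therefore take two shapes: a plain container ID string, or an auto object listing file IDs to mount. A sketch under that assumption (helper name is illustrative):

```python
# Build a code_interpreter tool definition. Pass a pre-created container ID,
# or let the API create one automatically with the given files attached.
def code_interpreter_tool(container_id=None, file_ids=None):
    if container_id is not None:
        container = container_id
    else:
        container = {"type": "auto", "file_ids": list(file_ids or [])}
    return {"type": "code_interpreter", "container": container}
```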
A tool that generates images using a model like gpt-image-1.
Background type for the generated image. One of transparent,
opaque, or auto. Default: auto.
Control how much effort the model will exert to match the style and features, especially facial features, of input images. This parameter is only supported for gpt-image-1. Unsupported for gpt-image-1-mini. Supports high and low. Defaults to low.
Optional mask for inpainting. Contains image_url
(string, optional) and file_id (string, optional).
The output format of the generated image. One of png, webp, or
jpeg. Default: png.
Number of partial images to generate in streaming mode, from 0 (default value) to 3.
A tool that allows the model to execute shell commands in a local environment.
A tool that allows the model to execute shell commands.
A custom tool that processes input using a specified format. Learn more about custom tools
The input format for the custom tool. Default is unconstrained text.
This tool searches the web for relevant results to use in a response. Learn more about the web search tool.
The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11.
High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default.
The user's location.
The two-letter ISO country code of the user, e.g. US.
The IANA timezone of the user, e.g. America/Los_Angeles.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
Whether to run the model response in the background. Learn more.
The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation.
An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens.
The maximum number of total calls to built-in tools that can be processed in a response. This maximum number applies across all built-in tool calls, not per individual tool. Any further attempts to call a tool by the model will be ignored.
SDK-only convenience property that contains the aggregated text output
from all output_text items in the output array, if any are present.
Supported in the Python and JavaScript SDKs.
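Outside those SDKs, the same aggregation is straightforward to reproduce from the raw response body. A sketch, assuming the response has been parsed into a dict:

```python
# Reproduce the SDK's output_text convenience: concatenate every
# output_text content part across all items in the output array.
def aggregate_output_text(response):
    parts = []
    for item in response.get("output", []):
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content["text"])
    return "".join(parts)
```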
The unique ID of the previous response to the model. Use this to
create multi-turn conversations. Learn more about
conversation state. Cannot be used in conjunction with conversation.
Reference to a prompt template and its variables. Learn more.
Used by OpenAI to cache responses for similar requests to optimize your cache hit rates. Replaces the user field. Learn more.
The retention policy for the prompt cache. Set to 24h to enable extended prompt caching, which keeps cached prefixes active for longer, up to a maximum of 24 hours. Learn more.
gpt-5 and o-series models only
Configuration options for reasoning models.
Constrains effort on reasoning for
reasoning models.
Currently supported values are none, minimal, low, medium, and high. Reducing
reasoning effort can result in faster responses and fewer tokens used
on reasoning in a response.
- gpt-5.1 defaults to none, which does not perform reasoning. The supported reasoning values for gpt-5.1 are none, low, medium, and high. Tool calls are supported for all reasoning values in gpt-5.1.
- All models before gpt-5.1 default to medium reasoning effort, and do not support none.
- The gpt-5-pro model defaults to (and only supports) high reasoning effort.
Deprecated: use summary instead.
A summary of the reasoning performed by the model. This can be
useful for debugging and understanding the model's reasoning process.
One of auto, concise, or detailed.
A stable identifier used to help detect users of your application that may be violating OpenAI's usage policies. The IDs should be a string that uniquely identifies each user. We recommend hashing their username or email address, in order to avoid sending us any identifying information. Learn more.
Specifies the processing type used for serving the request.
- If set to 'auto', then the request will be processed with the service tier configured in the Project settings. Unless otherwise configured, the Project will use 'default'.
- If set to 'default', then the request will be processed with the standard pricing and performance for the selected model.
- If set to 'flex' or 'priority', then the request will be processed with the corresponding service tier.
- When not set, the default behavior is 'auto'.
When the service_tier parameter is set, the response body will include the service_tier value based on the processing mode actually used to serve the request. This response value may be different from the value set in the parameter.
The status of the response generation. One of completed, failed,
in_progress, cancelled, queued, or incomplete.
Configuration options for a text response from the model. Can be plain text or structured JSON data. Learn more:
An object specifying the format that the model must output.
Configuring { "type": "json_schema" } enables Structured Outputs,
which ensures the model will match your supplied JSON schema. Learn more in the
Structured Outputs guide.
The default format is { "type": "text" } with no additional options.
Not recommended for gpt-4o and newer models:
Setting to { "type": "json_object" } enables the older JSON mode, which
ensures the message the model generates is valid JSON. Using json_schema
is preferred for models that support it.
Default response format. Used to generate text responses.
JSON Schema response format. Used to generate structured JSON responses. Learn more about Structured Outputs.
The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here.
A description of what the response format is for, used by the model to determine how to respond in the format.
Whether to enable strict schema adherence when generating the output.
If set to true, the model will always follow the exact schema defined
in the schema field. Only a subset of JSON Schema is supported when
strict is true. To learn more, read the Structured Outputs
guide.
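A sketch of assembling the json_schema format object described above into the request's text parameter; the helper name is illustrative, and the schema argument is any JSON Schema object you define.

```python
# Build the `text` parameter for Structured Outputs: a json_schema
# format with a name, the schema itself, and strict adherence enabled.
def json_schema_format(name, schema, description=None, strict=True):
    fmt = {"type": "json_schema", "name": name,
           "schema": schema, "strict": strict}
    if description is not None:
        fmt["description"] = description
    return {"format": fmt}
```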
JSON object response format. An older method of generating JSON responses.
Using json_schema is recommended for models that support it. Note that the
model will not generate JSON without a system or user message instructing it
to do so.
An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
The truncation strategy to use for the model response.
- auto: If the input to this Response exceeds the model's context window size, the model will truncate the response to fit the context window by dropping items from the beginning of the conversation.
- disabled (default): If the input size will exceed the context window size for a model, the request will fail with a 400 error.
Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used.
A detailed breakdown of the input tokens.
The number of tokens that were retrieved from the cache. More on prompt caching.
A detailed breakdown of the output tokens.
This field is being replaced by safety_identifier and prompt_cache_key. Use prompt_cache_key instead to maintain caching optimizations.
A stable identifier for your end-users.
Used to boost cache hit rates by better bucketing similar requests and to help OpenAI detect and prevent abuse. Learn more.
{
"type": "response.queued",
"response": {
"id": "res_123",
"status": "queued",
"created_at": "2021-01-01T00:00:00Z",
"updated_at": "2021-01-01T00:00:00Z"
},
"sequence_number": 1
}

response.custom_tool_call_input.delta
Event representing a delta (partial update) to the input of a custom tool call.
{
"type": "response.custom_tool_call_input.delta",
"output_index": 0,
"item_id": "ctc_1234567890abcdef",
"delta": "partial input text"
}

response.custom_tool_call_input.done
Event indicating that input for a custom tool call is complete.
{
"type": "response.custom_tool_call_input.done",
"output_index": 0,
"item_id": "ctc_1234567890abcdef",
"input": "final complete input text"
}

error
Emitted when an error occurs.
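A client consuming the stream should treat this event as terminal. A minimal sketch of a stream loop, assuming events have already been parsed into dicts (the helper name is illustrative):

```python
# Drain a stream of parsed server-sent events: surface error events
# as exceptions and return the final response on completion.
def drain_stream(events):
    for event in events:
        if event["type"] == "error":
            raise RuntimeError(f"{event.get('code')}: {event.get('message')}")
        if event["type"] == "response.completed":
            return event["response"]
    return None
```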
{
"type": "error",
"code": "ERR_SOMETHING",
"message": "Something went wrong",
"param": null,
"sequence_number": 1
}